A $100 Billion Chip Project Forced a 91-Year-Old Woman From Her Home

WIRED

Azalia King was the last holdout preventing the construction of a Micron megafab. Onondaga County authorities threatened to use eminent domain to take her home away by force. Azalia King moved into an upstate New York home surrounded by sprawling cattle pastures around 1965, about the time that mass production of the world's first microchips began. Now, 60 years later, the 91-year-old is on the verge of losing her home to make way for what could become the largest chipmaking complex in the US. Local authorities threatened to exercise their power of eminent domain, or taking land for public benefit, to forcibly uproot King and proceed with construction on a $100 billion campus where US tech giant Micron plans to make memory chips for use in a variety of electronics.


US Border Patrol Is Spying on Millions of American Drivers

WIRED

Plus: The SEC lets SolarWinds off the hook, Microsoft stops a historic DDoS attack, and FBI documents reveal the agency spied on an immigration activist Signal group in New York City. Eight years after a researcher warned WhatsApp that it was possible to extract user phone numbers en masse from the Meta-owned app, another team of researchers found that they could still do exactly that using a similar technique. The issue stems from WhatsApp's discovery feature, which allows someone to enter a person's phone number to see if they're on the app. By doing this billions of times--which WhatsApp did not prevent--researchers from the University of Vienna uncovered what they're calling "the most extensive exposure of phone numbers" ever.


OpenAI Signs $38 Billion Deal With Amazon

WIRED

OpenAI has committed to buying billions of dollars worth of compute from AWS--the latest in a string of major deals brokered by the AI startup. OpenAI has signed a multi-year deal with Amazon to buy $38 billion worth of AWS cloud infrastructure to train its models and serve its users. The deal is yet another sign of the AI industry becoming increasingly entangled, with OpenAI now at the center of major partnerships with industry players including Google, Oracle, Nvidia, and AMD. The AWS agreement is also notable because OpenAI rose to prominence in part through its partnership with Microsoft--Amazon's biggest cloud rival. Amazon is also a major backer of one of OpenAI's key competitors, Anthropic.


Your Friend Asked You a Question. Don't Copy and Paste an Answer From a Chatbot

WIRED

Your Friend Asked You a Question. Your friend came to you because they respect your knowledge and opinion, and outsourcing the answer to a machine is lazy and rude. Back in the 2010s, a website called Let Me Google That For You gained notable popularity by serving a single purpose: snark. The site lets you generate a custom link to send to somebody who asks you a question; when they click it, it plays an animation of the question being typed into Google.


MDF-MLLM: Deep Fusion Through Cross-Modal Feature Alignment for Contextually Aware Fundoscopic Image Classification

Jordan, Jason, Lor, Mohammadreza Akbari, Koulen, Peter, Shyu, Mei-Ling, Chen, Shu-Ching

arXiv.org Artificial Intelligence

This study aimed to enhance disease classification accuracy from retinal fundus images by integrating fine-grained image features and global textual context using a novel multimodal deep learning architecture. Existing multimodal large language models (MLLMs) often struggle to capture low-level spatial details critical for diagnosing retinal diseases such as glaucoma, diabetic retinopathy, and retinitis pigmentosa. This model development and validation study was conducted on 1,305 fundus image-text pairs compiled from three public datasets (FIVES, HRF, and StoneRounds), covering acquired and inherited retinal diseases, and evaluated using classification accuracy and F1-score. The MDF-MLLM integrates skip features from four U-Net encoder layers into cross-attention blocks within a LLaMA 3.2 11B MLLM. Vision features are patch-wise projected and fused using scaled cross-attention and FiLM-based U-Net modulation. The baseline MLLM achieved 60% accuracy on the dual-type disease classification task. MDF-MLLM, with both U-Net and MLLM components fully fine-tuned during training, achieved a significantly higher accuracy of 94%, a 56% relative improvement. Recall and F1-scores improved by as much as 67% and 35% over baseline, respectively. Ablation studies confirmed that the multi-depth fusion approach contributed to substantial gains in spatial reasoning and classification, particularly for inherited diseases with rich clinical text. MDF-MLLM presents a generalizable, interpretable, and modular framework for fundus image classification, outperforming traditional MLLM baselines through multi-scale feature fusion. The architecture holds promise for real-world deployment in clinical decision support systems. Future work will explore synchronized training techniques, a larger pool of diseases for more generalizability, and extending the model for segmentation tasks.
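The fusion steps named in the abstract -- patch-wise projection of vision features, scaled cross-attention against text tokens, and FiLM-style modulation -- are standard building blocks. Below is a minimal NumPy sketch of those generic operations; the shapes, the softmax details, and the placement inside the LLaMA 3.2 decoder are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def scaled_cross_attention(text_tokens, vision_patches, d_k):
    """Text tokens attend over vision patches: queries come from text,
    keys/values from (already projected) vision features, with standard
    scaled dot-product attention and a row-wise softmax."""
    scores = text_tokens @ vision_patches.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ vision_patches

def film_modulate(features, gamma, beta):
    """FiLM: a feature-wise affine transform conditioned on another source
    (here, hypothetical parameters predicted from U-Net skip features)."""
    return gamma * features + beta

rng = np.random.default_rng(0)
d = 8
text = rng.normal(size=(4, d))       # 4 text tokens of width d
patches = rng.normal(size=(16, d))   # 16 projected vision patches
fused = scaled_cross_attention(text, patches, d_k=d)
modulated = film_modulate(fused, gamma=rng.normal(size=d), beta=rng.normal(size=d))
print(fused.shape, modulated.shape)  # (4, 8) (4, 8)
```

In the paper's architecture, blocks of this kind would sit inside the MLLM's cross-attention layers, with keys and values drawn from U-Net skip features at four encoder depths.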


ECG Latent Feature Extraction with Autoencoders for Downstream Prediction Tasks

Harvey, Christopher, Shomaji, Sumaiya, Yao, Zijun, Noheria, Amit

arXiv.org Artificial Intelligence

The electrocardiogram (ECG) is an inexpensive and widely available tool for cardiac assessment. Despite its standardized format and small file size, the high complexity and inter-individual variability of ECG signals (typically a 60,000-sample vector: 12 leads at 500 Hz) make them challenging to use in deep learning models, especially when only small training datasets are available. This study addresses these challenges by exploring feature generation methods from representative-beat ECGs, focusing on Principal Component Analysis (PCA) and autoencoders to reduce data complexity. We introduce three novel Variational Autoencoder (VAE) variants -- Stochastic Autoencoder (SAE), Annealed beta-VAE (Abeta-VAE), and Cyclical beta-VAE (Cbeta-VAE) -- and compare their effectiveness in maintaining signal fidelity and enhancing downstream prediction tasks using a Light Gradient Boosting Machine (LGBM). The Abeta-VAE achieved superior signal reconstruction, reducing the mean absolute error (MAE) to 15.7 ± 3.2 μV, which is at the level of signal noise. Moreover, the SAE encodings, when combined with traditional ECG summary features, improved the prediction of reduced Left Ventricular Ejection Fraction (LVEF), achieving a holdout test set area under the receiver operating characteristic curve (AUROC) of 0.901 with an LGBM classifier. This performance nearly matches the 0.909 AUROC of a state-of-the-art CNN model but requires significantly fewer computational resources. Further, the ECG feature extraction-LGBM pipeline avoids overfitting and retains predictive performance when trained with less data. Our findings demonstrate that these VAE encodings are not only effective in simplifying ECG data but also provide a practical solution for applying deep learning in contexts with limited-scale labeled training data.
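The Annealed and Cyclical beta variants named above are standard beta-VAE training tricks: the weight on the KL term is ramped rather than fixed. A minimal sketch of such schedules and the beta-weighted VAE objective follows; the linear ramp shapes and the `ramp_frac` and `beta_max` parameters are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def annealed_beta(step, total_steps, beta_max=1.0):
    """Monotonic warm-up: beta ramps linearly from 0 to beta_max, then holds."""
    return beta_max * min(step / total_steps, 1.0)

def cyclical_beta(step, cycle_len, beta_max=1.0, ramp_frac=0.5):
    """Cyclical schedule: within each cycle, ramp up for the first
    ramp_frac of the cycle, then hold at beta_max until the cycle restarts."""
    pos = (step % cycle_len) / cycle_len
    return beta_max * min(pos / ramp_frac, 1.0)

def vae_loss(recon_error, mu, logvar, beta):
    """Reconstruction term plus beta-weighted KL divergence to N(0, I)."""
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return recon_error + beta * kl

betas = [round(cyclical_beta(s, cycle_len=10), 2) for s in range(10)]
print(betas)  # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.0, 1.0, 1.0, 1.0]
```

Annealing lets the model learn to reconstruct before the latent prior is enforced; the cyclical variant repeats the ramp, which is a common remedy for KL collapse.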


An Oversampling-enhanced Multi-class Imbalanced Classification Framework for Patient Health Status Prediction Using Patient-reported Outcomes

Yan, Yang, Chen, Zhong, Xu, Cai, Shen, Xinglei, Shiao, Jay, Einck, John, Chen, Ronald C, Gao, Hao

arXiv.org Artificial Intelligence

Patient-reported outcomes (PROs) collected directly from cancer patients being treated with radiation therapy play a vital role in assisting clinicians in counseling patients regarding likely toxicities. Precise prediction and evaluation of symptoms or health status associated with PROs are fundamental to enhancing decision-making and planning for the required services and support as patients transition into survivorship. However, the raw PRO data collected from hospitals exhibit intrinsic challenges such as incomplete item reports and imbalanced patient toxicities. To this end, in this study, we explore various machine learning techniques to predict patient outcomes related to health status, such as pain levels and sleep discomfort, using PRO datasets from a cancer photon/proton therapy center. Specifically, we deploy six advanced machine learning classifiers -- Random Forest (RF), XGBoost, Gradient Boosting (GB), Support Vector Machine (SVM), Multi-Layer Perceptron with Bagging (MLP-Bagging), and Logistic Regression (LR) -- to tackle a multi-class imbalanced classification problem across three prevalent cancer types: head and neck, prostate, and breast cancers. To address the class imbalance issue, we employ an oversampling strategy, adjusting the training set sample sizes through interpolations of in-class neighboring samples, thereby augmenting minority classes without deviating from the original skewed class distribution. Our experimental findings across multiple PRO datasets indicate that the RF and XGBoost methods achieve robust generalization performance, evidenced by weighted AUC and detailed confusion matrices, in categorizing outcomes as mild, intermediate, and severe post-radiation therapy. These results underscore the models' effectiveness and potential utility in clinical settings.
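The interpolation-based oversampling described above is in the spirit of SMOTE: synthetic minority samples are drawn on line segments between a sample and one of its in-class neighbors. A minimal NumPy sketch, with `k`, the neighbor selection, and the interpolation coefficient assumed for illustration rather than taken from the paper:

```python
import numpy as np

def oversample_minority(X, n_new, k=3, rng=None):
    """SMOTE-style oversampling sketch: for each synthetic sample, pick a
    random minority-class point, find its k nearest in-class neighbors by
    Euclidean distance, and interpolate toward a randomly chosen neighbor."""
    rng = rng or np.random.default_rng(0)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]  # skip index 0: the point itself
        j = rng.choice(neighbors)
        lam = rng.random()                  # interpolation coefficient in [0, 1)
        synthetic.append(X[i] + lam * (X[j] - X[i]))
    return np.vstack(synthetic)

# Toy minority class: four points on the unit square.
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new = oversample_minority(minority, n_new=6)
print(new.shape)  # (6, 2)
```

Because every synthetic point lies on a segment between two existing minority samples, the augmented class stays inside the original data's convex hull, which is what "without deviating from the original skewed class distribution" suggests.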


Comparison of Autoencoder Encodings for ECG Representation in Downstream Prediction Tasks

Harvey, Christopher J., Shomaji, Sumaiya, Yao, Zijun, Noheria, Amit

arXiv.org Artificial Intelligence

The electrocardiogram (ECG) is an inexpensive and widely available tool for cardiovascular assessment. Despite its standardized format and small file size, the high complexity and inter-individual variability of ECG signals (typically a 60,000-sample vector) make them challenging to use in deep learning models, especially when only small datasets are available. This study addresses these challenges by exploring feature generation methods from representative-beat ECGs, focusing on Principal Component Analysis (PCA) and autoencoders to reduce data complexity. We introduce three novel Variational Autoencoder (VAE) variants: Stochastic Autoencoder (SAE), Annealed beta-VAE (Abeta-VAE), and Cyclical beta-VAE (Cbeta-VAE), and compare their effectiveness in maintaining signal fidelity and enhancing downstream prediction tasks. The Abeta-VAE achieved superior signal reconstruction, reducing the mean absolute error (MAE) to 15.7 ± 3.2 μV, which is at the level of signal noise. Moreover, the SAE encodings, when combined with ECG summary features, improved the prediction of reduced Left Ventricular Ejection Fraction (LVEF), achieving an area under the receiver operating characteristic curve (AUROC) of 0.901. This performance nearly matches the 0.910 AUROC of state-of-the-art CNN models but requires significantly less data and fewer computational resources. Our findings demonstrate that these VAE encodings are not only effective in simplifying ECG data but also provide a practical solution for applying deep learning in contexts with limited-scale labeled training data.


Environment Scan of Generative AI Infrastructure for Clinical and Translational Science

Idnay, Betina, Xu, Zihan, Adams, William G., Adibuzzaman, Mohammad, Anderson, Nicholas R., Bahroos, Neil, Bell, Douglas S., Bumgardner, Cody, Campion, Thomas, Castro, Mario, Cimino, James J., Cohen, I. Glenn, Dorr, David, Elkin, Peter L, Fan, Jungwei W., Ferris, Todd, Foran, David J., Hanauer, David, Hogarth, Mike, Huang, Kun, Kalpathy-Cramer, Jayashree, Kandpal, Manoj, Karnik, Niranjan S., Katoch, Avnish, Lai, Albert M., Lambert, Christophe G., Li, Lang, Lindsell, Christopher, Liu, Jinze, Lu, Zhiyong, Luo, Yuan, McGarvey, Peter, Mendonca, Eneida A., Mirhaji, Parsa, Murphy, Shawn, Osborne, John D., Paschalidis, Ioannis C., Harris, Paul A., Prior, Fred, Shaheen, Nicholas J., Shara, Nawar, Sim, Ida, Tachinardi, Umberto, Waitman, Lemuel R., Wright, Rosalind J., Zai, Adrian H., Zheng, Kai, Lee, Sandra Soo-Jin, Malin, Bradley A., Natarajan, Karthik, Price, W. Nicholson II, Zhang, Rui, Zhang, Yiye, Xu, Hua, Bian, Jiang, Weng, Chunhua, Peng, Yifan

arXiv.org Artificial Intelligence

This study reports a comprehensive environmental scan of the generative AI (GenAI) infrastructure in the national network for clinical and translational science across 36 institutions supported by the Clinical and Translational Science Award (CTSA) Program led by the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH) in the United States. With the rapid advancement of GenAI technologies, including large language models (LLMs), healthcare institutions face unprecedented opportunities and challenges. This research explores the current status of GenAI integration, focusing on stakeholder roles, governance structures, and ethical considerations by administering a survey among leaders of health institutions (i.e., representing academic medical centers and health systems) to assess institutional readiness and approaches toward GenAI adoption. Key findings indicate a diverse range of institutional strategies, with most organizations in the experimental phase of GenAI deployment. The study highlights significant variations in governance models, with a strong preference for centralized decision-making but notable gaps in workforce training and ethical oversight. Moreover, the results underscore the need for a more coordinated approach to GenAI governance, emphasizing collaboration among senior leaders, clinicians, information technology staff, and researchers. Our analysis also reveals concerns regarding GenAI bias, data security, and stakeholder trust, which must be addressed to ensure the ethical and effective implementation of GenAI technologies. This study offers valuable insights into the challenges and opportunities of GenAI integration in healthcare, providing a roadmap for institutions aiming to leverage GenAI for improved quality of care and operational efficiency.


MetaGreen: Meta-Learning Inspired Transformer Selection for Green Semantic Communication

Mukherjee, Shubhabrata, Beard, Cory, Song, Sejun

arXiv.org Artificial Intelligence

Semantic Communication can transform the way we transmit information, prioritizing meaningful and effective content over individual symbols or bits. This evolution promises significant benefits, including reduced latency, lower bandwidth usage, and higher throughput compared to traditional communication. However, the development of Semantic Communication faces a crucial challenge: the need for universal metrics to benchmark the joint effects of semantic information loss and energy consumption. This research introduces an innovative solution: the "Energy-Optimized Semantic Loss" (EOSL) function, a novel multi-objective loss function that effectively balances semantic information loss and energy consumption. Through comprehensive experiments on transformer models, including energy benchmarking, we demonstrate the remarkable effectiveness of EOSL-based model selection. We have established that EOSL-based transformer model selection achieves up to 83% better similarity-to-power ratio (SPR) compared to BLEU score-based selection and 67% better SPR compared to solely lowest power usage-based selection. Furthermore, we extend the applicability of EOSL to diverse and varying contexts, inspired by the principles of Meta-Learning. By cumulatively applying EOSL, we enable the model selection system to adapt to this change, leveraging historical EOSL values to guide the learning process. This work lays the foundation for energy-efficient model selection and the development of green semantic communication.
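A selection criterion of the kind EOSL describes -- jointly penalizing semantic information loss and energy consumption -- can be illustrated as a weighted combination. The linear form, the hypothetical `lambda_e` weight, and the assumption that both inputs are normalized to comparable scales are illustrative choices; the paper's actual EOSL formulation is not reproduced here.

```python
def eosl_score(semantic_loss, energy, lambda_e=0.5):
    """Sketch of a multi-objective selection score: lower is better.
    Both inputs are assumed pre-normalized to [0, 1]; lambda_e trades off
    energy cost against semantic fidelity."""
    return (1 - lambda_e) * semantic_loss + lambda_e * energy

# Hypothetical candidate transformer models with benchmarked costs.
candidates = {
    "model_a": {"semantic_loss": 0.10, "energy": 0.80},  # accurate but costly
    "model_b": {"semantic_loss": 0.25, "energy": 0.20},  # lossier but frugal
}
best = min(candidates, key=lambda m: eosl_score(
    candidates[m]["semantic_loss"], candidates[m]["energy"]))
print(best)  # model_b
```

The meta-learning extension described in the abstract would then update this score cumulatively across contexts, e.g. by averaging historical EOSL values per model, so selection adapts as conditions change.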